
chore: add dedicated claude skill for RNE #800

Merged
msluszniak merged 7 commits into main from @ml/add-rne-claude-skill on Feb 17, 2026

Conversation

@mateuszlampert
Contributor

Description

This PR adds a Claude Skill for React Native ExecuTorch that can help with building, prototyping, and debugging RNE apps.

Introduces a breaking change?

  • Yes
  • No

Type of change

  • Bug fix (change which fixes an issue)
  • New feature (change which adds functionality)
  • Documentation update (improves or adds clarity to existing documentation)
  • Other (chores, tests, code style improvements etc.)

Tested on

  • iOS
  • Android

Testing instructions

The same version of this skill was uploaded to the software-mansion-labs/react-native-skills repository, so it's possible to use the skill globally now.

To do this, run:

npx skills add software-mansion-labs/react-native-skills 

(After merging this PR, it will be possible to add this skill to a project with npx skills add software-mansion/react-native-executorch.)

Screenshots

Related issues

Checklist

  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have updated the documentation accordingly
  • My changes generate no new warnings

Additional notes

@mateuszlampert mateuszlampert self-assigned this Feb 10, 2026
@mateuszlampert mateuszlampert linked an issue Feb 10, 2026 that may be closed by this pull request
@mateuszlampert mateuszlampert added the chore PRs that are chores label Feb 10, 2026
Comment on lines 209 to 337
Use `useLLM` with tool definitions to allow the model to call predefined functions.

**What to do:**

1. Define tools with name, description, and parameter schema
2. Configure the LLM with tool definitions
3. Implement callbacks to execute tools when the model requests them
4. Parse tool results and pass them back to the model

**Reference:** [./references/reference-llms.md](./references/reference-llms.md) - Tool Calling section
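
A minimal sketch of how these steps could be wired together. The `useLLM` hook comes from this section; the model constant, the `toolsConfig` option, and the callback shape below are assumptions and should be checked against the Tool Calling section of the reference.

```tsx
import { useLLM, LLAMA3_2_1B } from 'react-native-executorch'; // model constant is an assumption

// 1. Hypothetical tool definition: name, description, and parameter schema
const tools = [
  {
    name: 'get_weather',
    description: 'Returns the current weather for a city',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city'],
    },
  },
];

function WeatherAssistant() {
  // 2. Configure the LLM with the tool definitions (option name is an assumption)
  const llm = useLLM({
    model: LLAMA3_2_1B,
    toolsConfig: {
      tools,
      // 3.-4. Execute the tool when the model requests it and return the result,
      // which is parsed and passed back to the model
      executeToolCallback: async (call: { toolName: string; arguments: Record<string, unknown> }) => {
        if (call.toolName === 'get_weather') {
          return JSON.stringify({ city: call.arguments.city, tempC: 21 });
        }
        return null;
      },
    },
  });

  // llm.generate(...) then drives the conversation as in a plain chat setup
  return null;
}
```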

---

### I want structured data extraction from text

Use `useLLM` with structured output generation using JSON schema validation.

**What to do:**

1. Define a schema (JSON Schema or Zod) for desired output format
2. Configure the LLM with the schema
3. Generate responses and validate against the schema
4. Use the validated structured data in your app

**Reference:** [./references/reference-llms.md](./references/reference-llms.md) - Structured Output section
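
A rough sketch of the flow, using Zod for the schema. The prompt-then-validate pattern below is generic; the model constant and the `generate` call signature are assumptions, and whether a schema can be passed directly to the hook is documented in the Structured Output section of the reference.

```tsx
import { z } from 'zod';
import { useLLM, LLAMA3_2_1B } from 'react-native-executorch'; // model constant is an assumption

// 1. Desired output format as a Zod schema
const ContactSchema = z.object({
  name: z.string(),
  email: z.string().email(),
});

function ContactExtractor() {
  const llm = useLLM({ model: LLAMA3_2_1B });

  // 2.-4. Ask for JSON, then validate the generated text against the schema
  const extract = async (text: string) => {
    await llm.generate([
      { role: 'user', content: `Extract the contact as JSON with "name" and "email": ${text}` },
    ]);
    return ContactSchema.parse(JSON.parse(llm.response)); // throws if the output does not match
  };

  return null;
}
```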

---

### I want to classify or recognize objects in images

Use `useClassification` for simple categorization or `useObjectDetection` for locating specific objects.

**What to do:**

1. Choose appropriate computer vision model based on task
2. Load the model with the appropriate hook
3. Pass image URI (local, remote, or base64)
4. Process results (classifications, detections with bounding boxes)

**Reference:** [./references/reference-cv.md](./references/reference-cv.md)

**Model options:** [./references/reference-models.md](./references/reference-models.md) - Classification and Object Detection sections
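
A minimal sketch for the classification case; the model constant and the exact result shape are assumptions, to be checked against reference-cv.md and reference-models.md.

```tsx
import {
  useClassification,
  EFFICIENTNET_V2_S, // model constant is an assumption; see reference-models.md
} from 'react-native-executorch';

function PhotoLabeler({ imageUri }: { imageUri: string }) {
  // 1.-2. Load a classification model with the matching hook
  const model = useClassification({ model: EFFICIENTNET_V2_S });

  const classify = async () => {
    if (!model.isReady) return;
    // 3. Local, remote, or base64 image URI
    const results = await model.forward(imageUri);
    // 4. Assumed shape: label -> probability mapping; check reference-cv.md
    console.log(results);
  };

  return null;
}
```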

---

### I want to extract text from images

Use `useOCR` for horizontal text or `useVerticalOCR` for vertical text (experimental).

**What to do:**

1. Choose appropriate OCR model and recognizer matching your target language
2. Load the model with `useOCR` or `useVerticalOCR` hook
3. Pass image URI
4. Extract detected text regions with bounding boxes and confidence scores
5. Process results based on your application needs

**Reference:** [./references/reference-ocr.md](./references/reference-ocr.md)

**Model options:** [./references/reference-models.md](./references/reference-models.md) - OCR section
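
A sketch of the OCR flow; the hook options and the detection shape below are assumptions, and the reference file documents the supported detector/recognizer configuration.

```tsx
import { useOCR } from 'react-native-executorch';

function ReceiptReader({ imageUri }: { imageUri: string }) {
  // 1.-2. Load an OCR model; option names (language, detector/recognizer sources) are assumptions
  const ocr = useOCR({ language: 'en' });

  const readText = async () => {
    if (!ocr.isReady) return;
    // 3. Pass the image URI
    const detections = await ocr.forward(imageUri);
    // 4.-5. Assumed shape: text regions with bounding boxes and confidence scores
    detections.forEach((d: { text: string; score: number }) => console.log(d.text, d.score));
  };

  return null;
}
```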

---

### I want to convert speech to text or text to speech

Use `useSpeechToText` for transcription or `useTextToSpeech` for voice synthesis.

**What to do:**

- **For Speech-to-Text:** Capture or load audio, ensure 16kHz sample rate, transcribe
- **For Text-to-Speech:** Prepare text, specify voice parameters, generate audio waveform, play using audio context

**Reference:** [./references/reference-audio.md](./references/reference-audio.md)

**Model options:** [./references/reference-models.md](./references/reference-models.md) - Speech to Text and Text to Speech sections
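
A sketch for the speech-to-text direction; the model constant and the `transcribe` signature are assumptions, and the reference file has the exact API for both directions.

```tsx
import { useSpeechToText, WHISPER_TINY_EN } from 'react-native-executorch'; // model constant is an assumption

function Transcriber() {
  const stt = useSpeechToText({ model: WHISPER_TINY_EN });

  // waveform: mono samples at a 16 kHz sample rate, as required above
  const transcribe = async (waveform: number[]) => {
    if (!stt.isReady) return;
    const text = await stt.transcribe(waveform);
    console.log(text);
  };

  return null;
}
```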

---

### I want to find similar images or texts

Use `useImageEmbeddings` for images or `useTextEmbeddings` for text.

**What to do:**

1. Load appropriate embeddings model
2. Generate embeddings for your content
3. Compute similarity metrics (cosine similarity, dot product)
4. Use similarity scores for search, clustering, or deduplication

**Reference:**

- Text: [./references/reference-nlp.md](./references/reference-nlp.md)
- Images: [./references/reference-cv-2.md](./references/reference-cv-2.md)
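
A sketch for text similarity; the model constant is an assumption and the cosine-similarity helper is plain TypeScript, not part of the library.

```tsx
import { useTextEmbeddings, ALL_MINILM_L6_V2 } from 'react-native-executorch'; // model constant is an assumption

// 3. Plain cosine similarity between two embedding vectors (not a library function)
function cosineSimilarity(a: ArrayLike<number>, b: ArrayLike<number>): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function SimilaritySearch() {
  // 1. Load an embeddings model
  const embeddings = useTextEmbeddings({ model: ALL_MINILM_L6_V2 });

  // 2.-4. Embed both texts and use the score for search, clustering, or deduplication
  const compare = async (query: string, candidate: string) => {
    const [q, c] = await Promise.all([
      embeddings.forward(query),
      embeddings.forward(candidate),
    ]);
    return cosineSimilarity(q, c);
  };

  return null;
}
```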

---

### I want to apply artistic filters to photos

Use `useStyleTransfer` to apply predefined artistic styles to images.

**What to do:**

1. Choose from available artistic styles (Candy, Mosaic, Udnie, Rain Princess)
2. Load the style transfer model
3. Pass image URI
4. Retrieve and use the stylized image

**Reference:** [./references/reference-cv-2.md](./references/reference-cv-2.md)

**Model options:** [./references/reference-models.md](./references/reference-models.md) - Style Transfer section
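
A sketch of the style-transfer flow; the model constant and the assumption that `forward` returns a displayable image URI should be verified against the reference.

```tsx
import { useState } from 'react';
import { Image } from 'react-native';
import { useStyleTransfer, STYLE_TRANSFER_CANDY } from 'react-native-executorch'; // model constant is an assumption

function CandyFilter({ imageUri }: { imageUri: string }) {
  // 1.-2. Pick a style and load the matching model
  const model = useStyleTransfer({ model: STYLE_TRANSFER_CANDY });
  const [styledUri, setStyledUri] = useState<string | null>(null);

  const applyStyle = async () => {
    if (!model.isReady) return;
    // 3.-4. Pass the image URI and keep the stylized result (assumed to be a URI)
    setStyledUri(await model.forward(imageUri));
  };

  return styledUri ? (
    <Image source={{ uri: styledUri }} style={{ width: 256, height: 256 }} />
  ) : null;
}
```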

---

### I want to generate images from text

Use `useTextToImage` to create images based on text descriptions.

**What to do:**

1. Load the text-to-image model
2. Provide text description (prompt)
3. Optionally specify image size and number of generation steps
4. Receive generated image (may take 20-60 seconds depending on device)

**Reference:** [./references/reference-cv-2.md](./references/reference-cv-2.md)

**Model options:** [./references/reference-models.md](./references/reference-models.md) - Text to Image section
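
A sketch of the text-to-image flow; the model constant, the `generate` signature, and the option names below are assumptions to verify against the Text to Image section of the reference.

```tsx
import { useTextToImage, BK_SDM_TINY } from 'react-native-executorch'; // model constant is an assumption

function PosterGenerator() {
  // 1. Load the text-to-image model
  const t2i = useTextToImage({ model: BK_SDM_TINY });

  const generate = async () => {
    if (!t2i.isReady) return;
    // 2.-4. Provide a prompt; size and step count are optional (option names are assumptions).
    // Generation may take 20-60 seconds on device, per the note above.
    const imageUri = await t2i.generate('a watercolor painting of a lighthouse at dusk', {
      imageSize: 256,
      numSteps: 20,
    });
    console.log(imageUri);
  };

  return null;
}
```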

---
Collaborator


Please rephrase all of these so that there is information about the respective TS API class, not just about the hook.

Contributor Author

@mateuszlampert mateuszlampert Feb 11, 2026


Do you think that we should directly mention all TypeScript API equivalents in all reference files as well? For now I followed an approach where I describe hooks in detail and only mention the TypeScript API implementation in the additional resources section (with a link to the docs).


1. Choose appropriate computer vision model based on task
2. Load the model with the appropriate hook
3. Pass image URI (local, remote, or base64)
Member


Same as Jakub suggested earlier


1. Choose from available artistic styles (Candy, Mosaic, Udnie, Rain Princess)
2. Load the style transfer model
3. Pass image URI
Member


And here as well (probably every CV task)

@darthez
Collaborator

darthez commented Feb 11, 2026

Let's also add this skill to https://mcpmarket.com/submit?type=skill @mateuszlampert

Member

@msluszniak msluszniak left a comment


After merging timestamping in speech-to-text, I think that some examples are not up to date.

@mateuszlampert
Copy link
Contributor Author

mateuszlampert commented Feb 16, 2026

@msluszniak I added a canary version of the skill (with updated STT references) so it's easier to keep track of changes before releasing a new version of the package. IMO we should separate this, as users will mostly use the latest, not the next, version of the package.

The canary skill should not be visible to tools like skills.sh

@msluszniak
Member

Now the parameter in `imageSegmentation` is not called `resize` anymore. Could you correct the canary version for this one?

@msluszniak msluszniak force-pushed the @ml/add-rne-claude-skill branch from a4b0af6 to a9f0a72 on February 16, 2026 14:02
Collaborator

@chmjkb chmjkb left a comment


overall looks good 👏🏻

@msluszniak msluszniak merged commit 8f9ca75 into main Feb 17, 2026
3 checks passed
@msluszniak msluszniak deleted the @ml/add-rne-claude-skill branch February 17, 2026 10:48

Labels

chore PRs that are chores

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Create dedicated React Native ExecuTorch Claude skill

4 participants